Perception Toolbox for Virtual Reality (PTVR) Manual
Documentation for Users, version 1.1.0
The 'pointing at' gesture is very important in VR.
One key feature for VR users is the ability to perform actions by pointing at well-defined distant objects. For instance, you might want to make an object grow, disappear, or explode by pointing at it. You might also want to change many objects at once by pointing at a single specific object.
In figure 1, a subject points with her head at a vase: this is called head-contingent (aka head-controlled) pointing. In other words, the subject's gaze (not visible in the figure) might be pointing anywhere else, for instance at the floor.
Figure 1: a woman points at a vase with her head.
In this case, pointing is achieved with the head (this detection actually requires a headset, not shown in figure 1). However, pointing can be achieved with any pointing device detected by the Virtual Reality system. Currently, PTVR can detect the 'pointing at' gesture made by the head (actually the headset), the gaze (as measured by the eye tracker), and the hands (actually the hand controllers).
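Whatever the device, a pointing gesture can be reduced to a ray: an origin and a direction supplied by the tracking system. The sketch below illustrates this idea in plain Python; the device names and the data layout are illustrative assumptions, not PTVR's actual API.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PointingRay:
    """A pointing gesture reduced to an origin and a direction,
    whatever device produced it (field names are illustrative)."""
    origin: Vec3
    direction: Vec3

def ray_from_device(device: str, tracking: dict) -> PointingRay:
    # 'head', 'gaze' and 'hand' mirror the three sources PTVR can detect:
    # the headset, the eye tracker, and the hand controllers.
    origin, direction = tracking[device]
    return PointingRay(origin, direction)

# Hypothetical tracking samples for one frame.
tracking = {
    "head": ((0.0, 1.6, 0.0), (0.0, 0.0, 1.0)),    # headset pose
    "gaze": ((0.0, 1.6, 0.0), (0.1, -0.05, 1.0)),  # eye-tracker sample
    "hand": ((0.3, 1.2, 0.2), (0.0, 0.1, 1.0)),    # hand-controller pose
}
print(ray_from_device("head", tracking).direction)  # (0.0, 0.0, 1.0)
```

Reducing every device to the same ray representation is what lets the rest of the pointing machinery (cursor and event detection) stay device-agnostic.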
There are two independent PTVR components when pointing at an object:
The flat cursor is a visual feedback that moves with the pointing device. In figure 1, the flat cursor is a black reticle (aka crosshair) that moves with the head.
The goal of the flat cursor is to provide visual feedback allowing the subject to align the cursor with the target (here the vase). This alignment is a very efficient way of visually checking whether an object is correctly pointed at by the pointing device (headset, hand controller or gaze).
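The alignment check behind this feedback can be expressed as a small angular test: the target counts as pointed at when the angle between the pointing direction and the device-to-target direction falls below some tolerance. The following is a minimal sketch in plain Python; the function names and the 2-degree tolerance are assumptions for illustration, not PTVR's actual API.

```python
import math

def angle_between_deg(pointing_dir, to_target):
    """Angle (in degrees) between the pointing direction and the
    direction from the pointing device to the target."""
    dot = sum(p * t for p, t in zip(pointing_dir, to_target))
    norm_p = math.sqrt(sum(p * p for p in pointing_dir))
    norm_t = math.sqrt(sum(t * t for t in to_target))
    cos_a = max(-1.0, min(1.0, dot / (norm_p * norm_t)))  # clamp rounding error
    return math.degrees(math.acos(cos_a))

def is_pointed_at(device_pos, pointing_dir, target_pos, tolerance_deg=2.0):
    """True if the target lies within `tolerance_deg` of the pointing axis."""
    to_target = [t - d for t, d in zip(target_pos, device_pos)]
    return angle_between_deg(pointing_dir, to_target) <= tolerance_deg

# Head at the origin looking straight ahead (+z); vase slightly off-axis.
print(is_pointed_at((0, 0, 0), (0, 0, 1), (0.01, 0.0, 2.0)))  # True
```

The tolerance is what the manual calls the level of accuracy: a small tolerance demands precise alignment of cursor and target, a large one makes activation easier.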
This component is described in detail in the next 3 sub-sections.
In PTVR, the process of checking whether a given object is pointed at (or not) with a given level of accuracy is achieved thanks to a specific Event called 'PointedAt', as explained in this sub-section.
When this event is triggered, we say that the object has been activated by pointing, or that pointing at this object is activated / validated.
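The trigger logic can be pictured as an event object that watches the pointing error for its target and fires a callback the first time the error drops below the accuracy threshold. This is only an illustrative sketch of the idea: the class name mirrors PTVR's 'PointedAt' event, but its interface here is invented.

```python
class PointedAt:
    """Illustrative stand-in for a 'PointedAt' event: fires a callback
    once the target has been pointed at within the angular tolerance."""

    def __init__(self, target_name, tolerance_deg, callback):
        self.target_name = target_name
        self.tolerance_deg = tolerance_deg
        self.callback = callback
        self.activated = False

    def update(self, angular_error_deg):
        # Called each frame with the current pointing error for the target.
        if not self.activated and angular_error_deg <= self.tolerance_deg:
            self.activated = True  # pointing at this object is now validated
            self.callback(self.target_name)

activated = []
event = PointedAt("vase", 2.0, lambda name: activated.append(name))
for error in (10.0, 4.5, 1.2):  # the head gradually aligns with the vase
    event.update(error)
print(activated)  # ['vase']
```

The callback is where experiment-specific actions would go, such as making the pointed-at object grow, disappear, or explode.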
The two components described above are independent, and this independence has several practical consequences.
The next sub-sections provide detailed explanations of these two PTVR components, which allow users to point at objects.